Advances in deep learning and transfer learning have paved the way for various automated classification tasks in agriculture, including plant disease, pest, weed, and plant species detection. However, agricultural automation still faces challenges such as limited dataset sizes and the lack of plant-domain-specific pretrained models. Domain-specific pretrained models have shown state-of-the-art performance in various computer vision tasks, including face recognition and medical imaging diagnosis. In this paper, we propose the AgriNet dataset, a collection of 160k agricultural images from 19 geographical locations, captured by several imaging devices, and covering more than 423 classes of plant species and diseases. We also introduce the AgriNet models, a set of pretrained models: VGG16, VGG19, Inception-v3, InceptionResNet-v2, and Xception. AgriNet-VGG19 achieved the highest classification accuracy of 94% and the highest F1 score of 92%. Moreover, all proposed models were found to accurately classify the 423 classes of plant species, diseases, pests, and weeds, with the Inception-v3 model attaining the lowest accuracy of 87%. To evaluate the advantage of the AgriNet models over ImageNet models, experiments were conducted on two external datasets: a pest and plant disease dataset from Bangladesh and a plant disease dataset from Kashmir.
Swarm behaviour emerges from the local interactions of agents with each other and with their environment, often encoded as simple rules. Extracting these rules by watching a video of the overall swarm behaviour could help us study and control swarm behaviour in nature, or artificial swarms designed by external actors. It could also serve as a new source of inspiration for swarm robotics. However, extracting such rules is challenging, as there is often no obvious link between a swarm's emergent properties and its local interactions. To this end, we develop a method that automatically extracts understandable swarm controllers from video demonstrations. The method uses an evolutionary algorithm driven by a fitness function that compares eight high-level swarm metrics. The method is able to extract many controllers (behaviour trees) in a simple collective movement task. We then perform a qualitative analysis of controllers that resulted in different trees but similar behaviours. This provides a first step towards the automatic extraction of swarm controllers based on observation.
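The metric-comparison fitness driving the evolutionary search can be sketched as a distance between swarm-level metric vectors. Below is a minimal sketch assuming the eight metrics have already been computed and normalised; the metric values and the exact aggregation rule are illustrative, not the paper's.

```python
import numpy as np

def swarm_fitness(demo_metrics, candidate_metrics):
    """Score a candidate controller by how closely its swarm-level
    metrics match the demonstration's (illustrative aggregation)."""
    d = np.asarray(demo_metrics, dtype=float)
    c = np.asarray(candidate_metrics, dtype=float)
    # Negative mean absolute difference across the metric vector:
    # a candidate that reproduces the demonstration exactly scores 0.
    return -float(np.mean(np.abs(d - c)))

print(swarm_fitness([1.0, 0.0], [0.0, 1.0]))  # -1.0
```

An evolutionary algorithm would maximise this score over a population of candidate behaviour trees, each evaluated in simulation.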
Energy disaggregation estimates appliance-by-appliance electricity consumption from a single meter that measures the whole home's electricity demand. Compared with intrusive load monitoring, NILM (non-intrusive load monitoring) is low-cost, easy to deploy, and flexible. In this paper, we propose a new method, coined IMG-NILM, that utilises convolutional neural networks (CNNs) to disaggregate electricity data represented as images. CNNs are proven to be effective with images; hence, instead of the traditional representation of electricity data as time series, the data is converted into heatmaps, with higher electricity readings depicted as 'hotter' colours. The image representation is then used in a CNN to detect appliance signatures from aggregated data. IMG-NILM is flexible and shows consistent performance in disaggregating various types of appliances, including single- and multi-state appliances. It attains a test accuracy of up to 93% on the UK-DALE dataset within a single house, where a substantial number of appliances are present. In the more challenging setting where electricity data is collected from different houses, IMG-NILM also achieves a very good average accuracy of 85%.
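The time-series-to-heatmap conversion at the core of IMG-NILM can be sketched as folding a window of aggregate readings into a 2-D array and normalising it so higher readings map to 'hotter' values. The row count and layout below are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def series_to_heatmap(window, rows=32):
    """Fold a 1-D power window into a 2-D 'heatmap' image
    (hypothetical layout sketching the IMG-NILM idea)."""
    # Pad so the window length is divisible by the number of rows.
    cols = int(np.ceil(len(window) / rows))
    padded = np.zeros(rows * cols)
    padded[:len(window)] = window
    img = padded.reshape(rows, cols)
    # Normalise to [0, 1] so higher readings map to "hotter" values.
    rng = img.max() - img.min()
    return (img - img.min()) / rng if rng > 0 else img

demo = series_to_heatmap(np.abs(np.sin(np.linspace(0, 8, 1024))) * 2000)
print(demo.shape)  # (32, 32)
```

A CNN classifier would then consume these fixed-size images in place of the raw time series.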
Deep Metric Learning (DML) learns a non-linear semantic embedding from input data that brings similar pairs together while keeping dissimilar data apart. To this end, many different methods have been proposed over the last decade, with promising results in various applications. The success of a DML algorithm greatly depends on its loss function. However, no loss function is perfect, and each addresses only some aspects of an optimal similarity embedding. Moreover, the generalizability of DML to unseen categories at test time is an important matter not considered by existing loss functions. To address these challenges, we propose novel approaches to combine different losses built on top of a shared deep feature extractor. The proposed ensemble of losses enforces the deep model to extract features that are consistent with all losses. Since the selected losses are diverse, and each emphasizes different aspects of an optimal semantic embedding, our combining methods yield a considerable improvement over any individual loss and generalize well to unseen categories. There is no limitation in choosing loss functions, and our methods can work with any set of existing ones. Furthermore, they can optimize each loss function, as well as its weight, in an end-to-end paradigm with no need to adjust any hyper-parameters. We evaluate our methods on popular datasets from the machine vision domain in conventional Zero-Shot Learning (ZSL) settings. The results are very encouraging and show that our methods outperform all baseline losses by a large margin on all datasets.
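One common way to combine several losses with learnable weights in an end-to-end fashion is uncertainty weighting via per-loss log-variance terms. The sketch below illustrates that general idea; it is not necessarily the paper's exact combining rule.

```python
import numpy as np

def combined_loss(losses, log_vars):
    """Weight several loss values with learnable log-variance terms
    (an uncertainty-weighting sketch, not the paper's exact method)."""
    losses = np.asarray(losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    # exp(-s_i) scales each loss; adding s_i as a regulariser stops
    # the learned weights from collapsing all losses to zero.
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))

print(combined_loss([0.8, 1.2], [0.0, 0.0]))  # 2.0
```

In a real training loop the `log_vars` would be trainable parameters updated by the same optimizer as the shared feature extractor.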
Recent work has shown the benefits of synthetic data for use in computer vision, with applications ranging from autonomous driving to face landmark detection and reconstruction. Synthetic data offers a number of benefits, from privacy preservation and bias elimination to the quality and feasibility of annotation. Generating human-centered synthetic data is a particular challenge in terms of realism and domain gap, though recent work has shown that effective machine learning models can be trained using synthetic face data alone. We show that this can be extended to the full body by building on the pipeline of Wood et al. to generate synthetic images of humans in their entirety, with ground-truth annotations for computer vision applications. In this report we describe how we construct a parametric model of the face and body, including articulated hands; our rendering pipeline for generating realistic images of humans based on this body model; an approach for training DNNs to regress a dense set of landmarks covering the entire body; and a method for fitting our body model to dense landmarks predicted from multiple views.
To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the high resolution. Based on the observation that pixels rendered at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that the ray-traced samples at the target resolution are accurate and reliable, which turns supersampling into an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features, significantly improving temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted to reconstruct and generate the desired high-resolution image. Experimental results and comparisons show that our method generates higher-quality supersampling results than current state-of-the-art methods without increasing the total number of ray-traced samples.
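The temporal accumulation step can be illustrated as blending current and (warped) previous features with a weight derived from how well they agree. The blend rule below is a toy stand-in for the learned network, with `alpha` a hypothetical base history weight.

```python
import numpy as np

def temporal_accumulate(current, previous, alpha=0.2):
    """Blend current features with the previous frame's, trusting the
    current frame more wherever the two disagree (toy stand-in for a
    learned temporal accumulation network)."""
    diff = np.abs(current - previous)
    # Weight ranges from alpha (frames agree) up to 1 (frames differ).
    w = alpha + (1 - alpha) * np.clip(diff, 0, 1)
    return w * current + (1 - w) * previous

a = np.zeros((4, 4)); b = np.ones((4, 4))
print(temporal_accumulate(b, b)[0, 0])  # 1.0
```

In the paper this weighting is predicted by a network from feature correlations rather than a fixed rule; the sketch only conveys the accumulate-unless-disoccluded intuition.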
In this paper we explore the task of modeling (semi-)structured object sequences; in particular, we focus on developing a structure-aware input representation for such sequences. We assume that each structured object is represented by a set of key-value pairs encoding its attributes. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments on real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, along with detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening the sequence objects and also allows us to operate on significantly longer sequences than existing methods.
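The Temporal Value Modeling view, one value sequence per key evolving over time, can be sketched in a few lines; the key names and objects below are illustrative, not from the paper.

```python
def key_value_sequences(objects, universe):
    """Pivot a sequence of key-value objects into one value sequence
    per key (the 'Temporal Value Modeling' view; keys illustrative)."""
    seqs = {k: [] for k in universe}
    for obj in objects:
        for k in universe:
            seqs[k].append(obj.get(k))  # None marks a missing attribute
    return seqs

events = [{"status": "open", "owner": "a"},
          {"status": "open"},
          {"status": "closed", "owner": "b"}]
print(key_value_sequences(events, ["status", "owner"]))
# {'status': ['open', 'open', 'closed'], 'owner': ['a', None, 'b']}
```

Each per-key sequence would then be encoded independently before the Key Aggregation module self-attends across keys.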
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
Deep neural networks (DNNs) are vulnerable to a class of attacks called "backdoor attacks", which create an association between a backdoor trigger and a target label the attacker is interested in exploiting. A backdoored DNN performs well on clean test images, yet persistently predicts an attacker-defined label for any sample in the presence of the backdoor trigger. Although backdoor attacks have been extensively studied in the image domain, there are very few works that explore such attacks in the video domain, and they tend to conclude that image backdoor attacks are less effective in the video domain. In this work, we revisit the traditional backdoor threat model and incorporate additional video-related aspects into it. We show that poisoned-label image backdoor attacks can be extended temporally in two ways, statically and dynamically, leading to highly effective attacks in the video domain. In addition, we explore natural video backdoors to highlight the seriousness of this vulnerability in the video domain. Finally, for the first time, we study multi-modal (audiovisual) backdoor attacks against video action recognition models, where we show that attacking a single modality is enough to achieve a high attack success rate.
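The "static" temporal extension of an image backdoor can be sketched as stamping the same trigger patch on every frame of a clip; the patch size, placement, and clip shape below are illustrative.

```python
import numpy as np

def poison_video(frames, trigger, y=0, x=0):
    """Statically extend an image backdoor trigger to video by
    stamping the same patch on every frame (illustrative sketch)."""
    poisoned = frames.copy()
    th, tw = trigger.shape[:2]
    poisoned[:, y:y + th, x:x + tw] = trigger  # same patch, all frames
    return poisoned

clip = np.zeros((16, 32, 32), dtype=np.uint8)   # T x H x W grayscale clip
patch = np.full((4, 4), 255, dtype=np.uint8)    # white square trigger
print(poison_video(clip, patch)[0, :4, :4].mean())  # 255.0
```

A "dynamic" extension would instead vary the trigger across frames, e.g. by moving or animating the patch over time.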
Modern deep neural networks have achieved superhuman performance in tasks from image classification to game play. Surprisingly, these various complex systems with massive amounts of parameters exhibit the same remarkable structural properties in their last-layer features and classifiers across canonical datasets. This phenomenon is known as "Neural Collapse," and it was discovered empirically by Papyan et al. \cite{Papyan20}. Recent papers have theoretically shown that the global solutions to the network training problem under a simplified "unconstrained feature model" exhibit this phenomenon. We take a step further and prove that Neural Collapse occurs in deep linear networks for the popular mean squared error (MSE) and cross-entropy (CE) losses. Furthermore, we extend our study to imbalanced data for the MSE loss and present the first geometric analysis of Neural Collapse in this setting.
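Part of the Neural Collapse phenomenon is that last-layer class means converge to a simplex equiangular tight frame (ETF): unit-norm vectors whose pairwise cosine is -1/(K-1) for K classes. A small numerical check of that geometry:

```python
import numpy as np

def simplex_etf(k):
    """Columns form a K-class simplex equiangular tight frame, the
    structure class means collapse to under Neural Collapse."""
    return np.sqrt(k / (k - 1)) * (np.eye(k) - np.ones((k, k)) / k)

etf = simplex_etf(4)
# Gram matrix: ones on the diagonal, -1/(K-1) off the diagonal.
gram = etf.T @ etf
print(np.round(gram, 3))
```

The uniform -1/(K-1) off-diagonal entries are what make the frame "equiangular": every pair of class means is separated by the same, maximal angle.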